    About the rapidity and helicity distributions of the W bosons produced at LHC

    W bosons are produced at the LHC from a forward-backward symmetric initial state. Their decay to a charged lepton and a neutrino has a strong spin-analysing power. The combination of these effects results in characteristic distributions of the pseudorapidity of the leptons from the decays of W+ and W− bosons of different helicity. This observation may open the possibility of measuring precisely the W+ and W− rapidity distributions for the two transverse polarisation states of W bosons produced at small transverse momentum. Comment: 8 pages, 5 figures

    Study the effect of beam energy spread and detector resolution on the search for Higgs boson decays to invisible particles at a future e+e− circular collider

    We study the expected sensitivity to measure the branching ratio of Higgs boson decays to invisible particles at a future circular e+e− collider (FCC-ee) in the process e+e− → HZ with Z → ℓ+ℓ− (ℓ = e or μ), using an integrated luminosity of 3.5 ab⁻¹ at a center-of-mass energy sqrt(s) = 240 GeV. The impact of the energy spread of the FCC-ee beam and of the resolution in the reconstruction of the leptons is discussed. The minimum branching ratio for a 5σ observation after 3.5 ab⁻¹ of data taking is 1.7 ± 0.1% (stat+syst). The branching ratio exclusion limit at 95% CL is 0.63 ± 0.22% (stat+syst). Comment: 17 pages, submitted to EPJ

    Variational Autoencoders for New Physics Mining at the Large Hadron Collider

    Using variational autoencoders trained on known physics processes, we develop a one-sided threshold test to isolate previously unseen processes as outlier events. Since the autoencoder training does not depend on any specific new-physics signature, the proposed procedure makes no specific assumptions on the nature of new physics. An event selection based on this algorithm would be complementary to classic LHC searches, which are typically based on model-dependent hypothesis testing. Such an algorithm would deliver a list of anomalous events that the experimental collaborations could further scrutinize and even release as a catalog, similarly to what is typically done in other scientific domains. Event topologies recurring in this dataset could inspire new-physics model building and new experimental searches. Running in the trigger system of the LHC experiments, such an application could identify anomalous events that would otherwise be lost, extending the scientific reach of the LHC. Comment: 29 pages, 12 figures, 5 tables
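
The one-sided threshold test described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the `background_losses` array stands in for per-event reconstruction losses that, in the paper, come from a variational autoencoder trained on known Standard Model processes, and the target rate is an assumed example value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-event VAE reconstruction losses on background-only data;
# in the paper these come from an autoencoder trained on known SM processes.
background_losses = rng.exponential(scale=1.0, size=100_000)

# One-sided threshold test: flag only events whose loss exceeds the
# (1 - rate) quantile of the background loss distribution.
target_rate = 1e-3  # assumed anomaly acceptance rate on background
threshold = np.quantile(background_losses, 1.0 - target_rate)

def is_anomalous(losses, thr=threshold):
    """Flag events whose reconstruction loss exceeds the threshold."""
    return np.asarray(losses) > thr

new_events = rng.exponential(scale=1.0, size=10_000)
selected = is_anomalous(new_events)
print(f"selected fraction: {selected.mean():.4f}")  # close to target_rate
```

Because the test is one-sided, only the high-loss tail is kept; running online, such a cut would write out a small, fixed-rate stream of anomalous events.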

    Adversarially Learned Anomaly Detection on CMS Open Data: re-discovering the top quark

    We apply an Adversarially Learned Anomaly Detection (ALAD) algorithm to the problem of detecting new physics processes in proton-proton collisions at the Large Hadron Collider. Anomaly detection based on ALAD matches the performance reached by Variational Autoencoders, with a substantial improvement in some cases. Training the ALAD algorithm on 4.4 fb-1 of 8 TeV CMS Open Data, we show how data-driven anomaly detection and characterization would work in practice, re-discovering the top quark by identifying the main features of the t-tbar experimental signature at the LHC. Comment: 16 pages, 9 figures

    Data Augmentation at the LHC through Analysis-specific Fast Simulation with Deep Learning

    We present a fast simulation application based on a Deep Neural Network, designed to create large analysis-specific datasets. Taking as an example the generation of W+jet events produced in sqrt(s) = 13 TeV proton-proton collisions, we train a neural network to model detector resolution effects as a transfer function acting on an analysis-specific set of relevant features, computed at generation level, i.e., in the absence of detector effects. Based on this model, we propose a novel fast-simulation workflow that starts from a large amount of generator-level events to deliver large analysis-specific samples. The adoption of this approach would result in about an order-of-magnitude reduction in computing and storage requirements for the collision simulation workflow. This strategy could help the high energy physics community to face the computing challenges of the future High-Luminosity LHC. Comment: 15 pages, 12 figures
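
The transfer-function idea in this abstract can be illustrated with a toy sketch. The paper learns the generator-to-detector map with a deep neural network; here, purely as an assumed stand-in, a linear least-squares fit on one feature (a jet pT smeared with a made-up 97% response) shows the workflow of fitting on a small "full simulation" sample and applying to a much larger generator-level sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy generator-level feature and a made-up detector response ("full sim"):
# reco = 0.97 * gen plus pT-dependent Gaussian smearing (illustrative only).
gen_pt = rng.uniform(20.0, 200.0, size=50_000)
reco_pt = 0.97 * gen_pt + rng.normal(0.0, 0.05 * gen_pt)

# Fit the average response reco = a * gen + b on a small fully
# simulated subsample (the paper uses a DNN instead of this linear fit).
A = np.vstack([gen_pt[:5_000], np.ones(5_000)]).T
(a, b), *_ = np.linalg.lstsq(A, reco_pt[:5_000], rcond=None)

# Apply the learned transfer function to a much larger generator-level
# sample, skipping the cost of full detector simulation per event.
large_gen_sample = rng.uniform(20.0, 200.0, size=1_000_000)
fast_sim_pt = a * large_gen_sample + b
print(f"fitted response a = {a:.3f}")
```

A real application would replace the linear fit with a network mapping all analysis-relevant features at once, so that correlations between them are preserved.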


    New Physics Agnostic Selections For New Physics Searches

    We discuss a model-independent strategy for boosting new physics searches with the help of an unsupervised anomaly detection algorithm. Prior to the search, each input event is preprocessed by the algorithm, a variational autoencoder (VAE). Based on the loss assigned to each event, the input data can be split into a background control sample and a signal-enriched sample. Following this strategy, one can enhance the sensitivity to new physics without any assumption on the underlying new-physics signature. Our results show that a typical BSM search on the signal-enriched sample is more sensitive than an equivalent search on the original dataset.
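
The control/enriched split described above can be sketched with toy numbers. This is an illustration under assumed inputs, not the paper's setup: the loss values and the 1% signal admixture are invented, standing in for per-event VAE losses on a real dataset.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in per-event VAE losses: background peaks at low loss, while a
# small signal admixture sits at higher loss (illustrative numbers only).
losses = np.concatenate([
    rng.exponential(1.0, size=99_000),        # background-like events
    rng.exponential(1.0, size=1_000) + 5.0,   # signal-like outliers
])
is_signal = np.concatenate([np.zeros(99_000, bool), np.ones(1_000, bool)])

# Split on a loss quantile: low-loss events form the background control
# sample, high-loss events form the signal-enriched sample on which the
# BSM search is then run.
cut = np.quantile(losses, 0.99)
control = is_signal[losses <= cut]
enriched = is_signal[losses > cut]

print(f"signal fraction: {is_signal.mean():.3f} -> {enriched.mean():.3f}")
```

Since the cut uses only the loss, no new-physics model enters the selection; the enrichment comes entirely from signal events being poorly reconstructed by an autoencoder trained on background.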